Results 1 - 5 of 5
1.
Multimed Syst ; : 1-14, 2023 Apr 19.
Article in English | MEDLINE | ID: covidwho-2290683

ABSTRACT

The coronavirus disease 2019 (COVID-19), initially named 2019-nCoV, was declared a global pandemic by the World Health Organization in March 2020. Because of the growing number of COVID-19 patients, health infrastructure around the world has been overwhelmed, and computer-aided diagnosis has become a necessity. Most models proposed for COVID-19 detection in chest X-rays perform image-level analysis and do not identify the infected region in the images, which is needed for an accurate and precise diagnosis. Lesion segmentation helps medical experts identify the infected region in the lungs. Therefore, in this paper, a UNet-based encoder-decoder architecture is proposed for COVID-19 lesion segmentation in chest X-rays. To improve performance, the proposed model employs an attention mechanism and a convolution-based atrous spatial pyramid pooling module. The proposed model obtained a Dice similarity coefficient of 0.8325 and a Jaccard index of 0.7132, outperforming the state-of-the-art UNet model. An ablation study highlights the contributions of the attention mechanism and of the small dilation rates in the atrous spatial pyramid pooling module.
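As a point of reference for the metrics reported above, the following is a minimal sketch (not the authors' code) of how the Dice similarity coefficient and Jaccard index are typically computed for binary segmentation masks; the function name, array names, and the smoothing constant eps are illustrative assumptions.

```python
import numpy as np

def dice_and_jaccard(pred, target, eps=1e-7):
    """Dice coefficient and Jaccard index for two binary masks of equal shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
    jaccard = (intersection + eps) / (union + eps)
    return dice, jaccard

# Illustrative usage with random masks (not real data)
rng = np.random.default_rng(0)
pred_mask = rng.random((256, 256)) > 0.5
gt_mask = rng.random((256, 256)) > 0.5
print(dice_and_jaccard(pred_mask, gt_mask))
```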

2.
Diagnostics (Basel) ; 12(6)2022 Jun 16.
Article in English | MEDLINE | ID: covidwho-2199863

ABSTRACT

BACKGROUND: The previous COVID-19 lung diagnosis system lacks both scientific validation and an explainable artificial intelligence (AI) component for understanding lesion localization. This study presents a cloud-based explainable-AI system, "COVLIAS 2.0-cXAI", using four kinds of class activation map (CAM) models. METHODOLOGY: Our cohort consisted of ~6000 CT slices from two sources (Croatia, 80 COVID-19 patients; Italy, 15 control patients). The COVLIAS 2.0-cXAI design consisted of three stages: (i) automated lung segmentation using a hybrid deep learning ResNet-UNet model with automatic adjustment of Hounsfield units, hyperparameter optimization, and parallel and distributed training; (ii) classification using three kinds of DenseNet (DN) models (DN-121, DN-169, DN-201); and (iii) validation using four kinds of CAM visualization techniques: gradient-weighted class activation mapping (Grad-CAM), Grad-CAM++, score-weighted CAM (Score-CAM), and FasterScore-CAM. COVLIAS 2.0-cXAI was validated by three trained senior radiologists for stability and reliability, and the Friedman test was performed on the three radiologists' scores. RESULTS: The ResNet-UNet segmentation model achieved a Dice similarity of 0.96, a Jaccard index of 0.93, a correlation coefficient of 0.99, and a figure-of-merit of 95.99%, while the classifier accuracies for the three DN nets (DN-121, DN-169, and DN-201) were 98%, 98%, and 99%, with losses of ~0.003, ~0.0025, and ~0.002 after 50 epochs, respectively. The mean AUC for all three DN models was 0.99 (p < 0.0001). COVLIAS 2.0-cXAI achieved a mean alignment index (MAI) score of four out of five between heatmaps and the gold standard in 80% of scans, supporting use of the system in clinical settings. CONCLUSIONS: COVLIAS 2.0-cXAI demonstrates a cloud-based explainable-AI system for lesion localization in lung CT scans.
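Of the four CAM variants named above, Grad-CAM is the simplest; the sketch below shows the generic technique in PyTorch, not the COVLIAS 2.0-cXAI implementation. The randomly initialised torchvision DenseNet-121, the features.denseblock4 target layer, and the random input tensor are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F
from torchvision.models import densenet121

# Randomly initialised DenseNet-121 (torchvision >= 0.13); real use would load
# trained weights and apply a proper CT preprocessing pipeline.
model = densenet121(weights=None).eval()
store = {}

def capture(module, inputs, output):
    # Keep the target layer's activations and register a tensor hook so the
    # gradient flowing back into this layer is saved during backward().
    store["act"] = output
    output.register_hook(lambda grad: store.__setitem__("grad", grad))

# Target layer is an assumption; any late convolutional block could be used.
model.features.denseblock4.register_forward_hook(capture)

x = torch.randn(1, 3, 224, 224)        # placeholder for a preprocessed CT slice
logits = model(x)
class_idx = int(logits.argmax(dim=1))  # explain the top predicted class
model.zero_grad()
logits[0, class_idx].backward()

# Grad-CAM: channel weights are the spatial mean of the gradients; the heatmap
# is the ReLU of the weighted sum of activations, upsampled to the input size.
weights = store["grad"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * store["act"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
print(cam.shape)  # torch.Size([1, 1, 224, 224]) heatmap in [0, 1]
```

Grad-CAM++, Score-CAM, and FasterScore-CAM differ mainly in how the channel weights are computed.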

3.
30th European Signal Processing Conference, EUSIPCO 2022 ; 2022-August:1362-1366, 2022.
Article in English | Scopus | ID: covidwho-2101855

ABSTRACT

Deep learning has shown remarkable promise in medical imaging tasks, reaching an expert level of performance for some diseases. However, these models often fail to generalize properly to data not used during training, which is a major roadblock to successful clinical deployment. This paper proposes a generalization enhancement approach that can mitigate the gap between source and unseen data in deep learning-based segmentation models without using ground-truth masks of the target domain. Leveraging the subset of the unseen domain's CT slices for which the model trained on the source data yields the most confident predictions, together with their predicted masks, the model learns helpful features of the unseen data through a retraining process. We investigated the effectiveness of the introduced method over three rounds of experiments on three open-access COVID-19 lesion segmentation datasets, and the results show consistent improvements in segmentation performance on datasets not seen during training. © 2022 European Signal Processing Conference, EUSIPCO. All rights reserved.
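The retraining scheme described above is closely related to confidence-based pseudo-labelling; the sketch below illustrates that general pattern rather than the paper's exact method, with the mean-max-softmax confidence measure, the 0.9 threshold, and all names being assumptions.

```python
import torch

def select_confident_slices(model, unseen_slices, threshold=0.9):
    """Return (slice, pseudo_mask) pairs on which the model is most confident.

    unseen_slices: iterable of tensors shaped (1, H, W). Per-slice confidence
    is the mean of the per-pixel max softmax probability (an assumption).
    """
    model.eval()
    selected = []
    with torch.no_grad():
        for ct_slice in unseen_slices:
            probs = torch.softmax(model(ct_slice.unsqueeze(0)), dim=1)  # (1, C, H, W)
            confidence = probs.max(dim=1).values.mean().item()
            if confidence >= threshold:
                pseudo_mask = probs.argmax(dim=1).squeeze(0)  # (H, W) labels
                selected.append((ct_slice, pseudo_mask))
    return selected

# Illustrative usage with a throwaway 1x1-convolution "model" and random slices.
dummy_model = torch.nn.Conv2d(1, 2, kernel_size=1)
slices = [torch.randn(1, 64, 64) for _ in range(4)]
confident = select_confident_slices(dummy_model, slices, threshold=0.5)
print(len(confident), "slices selected for retraining")
```

The selected pairs would then be mixed with the source-domain data and the segmentation model retrained on the union, repeating the procedure over several rounds.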

4.
Front Med (Lausanne) ; 8: 755309, 2021.
Article in English | MEDLINE | ID: covidwho-1636430

ABSTRACT

Background: The novel coronavirus disease 2019 (COVID-19) has spread widely around the world, posing a severe threat to people's living environment. Objective: On CT imaging, the structure of COVID-19 lesions is complicated and varies greatly between cases. To locate COVID-19 lesions accurately and assist doctors in making the best diagnosis and treatment plan, a deep-supervised ensemble learning network is presented for COVID-19 lesion segmentation in CT images. Methods: Since large numbers of COVID-19 CT images and the corresponding lesion annotations are difficult to obtain, a transfer learning strategy is employed to compensate for this shortage and alleviate the overfitting problem. Because a single traditional deep learning framework has difficulty extracting the complicated and varied features of COVID-19 lesions effectively, some lesions may go undetected. To overcome this problem, a deep-supervised ensemble learning network that combines local and global features is presented for COVID-19 lesion segmentation. Results: The performance of the proposed method was validated in experiments on a publicly available dataset. Compared with manual annotations, the proposed method achieved a high intersection over union (IoU) of 0.7279 and a low Hausdorff distance (H) of 92.4604. Conclusion: A deep-supervised ensemble learning network was presented for coronavirus pneumonia lesion segmentation in CT images. Its effectiveness was verified by visual inspection and quantitative evaluation, and the experimental results indicate good performance in COVID-19 lesion segmentation.
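The Hausdorff distance quoted above measures the worst-case disagreement between the predicted and manual masks. Below is a minimal, generic sketch using SciPy (not the authors' code); comparing all foreground pixel coordinates rather than only boundary points is a simplifying assumption.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_distance(pred, target):
    """Symmetric Hausdorff distance between two binary masks.

    Uses all foreground pixel coordinates for simplicity; many implementations
    restrict this to boundary points or report the 95th percentile instead.
    """
    pred_pts = np.argwhere(pred.astype(bool))
    target_pts = np.argwhere(target.astype(bool))
    forward = directed_hausdorff(pred_pts, target_pts)[0]
    backward = directed_hausdorff(target_pts, pred_pts)[0]
    return max(forward, backward)

# Illustrative usage with two overlapping square masks (not real data)
a = np.zeros((128, 128), dtype=bool); a[20:80, 20:80] = True
b = np.zeros((128, 128), dtype=bool); b[30:90, 30:90] = True
print(hausdorff_distance(a, b))  # sqrt(200) ~= 14.14 for this toy example
```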

5.
Pattern Recognit ; 114: 107747, 2021 Jun.
Article in English | MEDLINE | ID: covidwho-899401

ABSTRACT

History shows that an infectious disease such as COVID-19 can stun the world quickly, causing massive losses of health and having a profound impact, in both safety and economic terms, on the lives of billions of people. The best strategy for controlling the COVID-19 pandemic is early intervention to stop the spread of the disease. In general, computed tomography (CT) is used to detect tumors and lung conditions such as pneumonia, tuberculosis, emphysema, and other diseases of the pleura (the membrane covering the lungs). Disadvantages of CT imaging are its inferior soft-tissue contrast compared to MRI and, because it is X-ray based, radiation exposure. Lung CT image segmentation is a necessary initial step for lung image analysis, and the main challenges for segmentation algorithms are exacerbated by intensity inhomogeneity, the presence of artifacts, and the closeness in gray level of different soft tissues. The goal of this paper is to design and evaluate an automatic tool for COVID-19 lung infection segmentation and measurement using chest CT images. Extensive computer simulations show the better efficiency and flexibility of this end-to-end learning approach to CT image segmentation with image enhancement, compared with state-of-the-art segmentation approaches, namely GraphCut, Medical Image Segmentation (MIS), and Watershed. Experiments were performed on the COVID-CT-Dataset, containing 275 CT scans positive for COVID-19, and on new data acquired from the EL-BAYANE center for Radiology and Medical Imaging. The mean accuracy, sensitivity, F-measure, precision, MCC, Dice, Jaccard, and specificity were 0.98, 0.73, 0.71, 0.73, 0.71, 0.71, 0.57, and 0.99, respectively, better than those of the methods mentioned above. The achieved results show that the proposed approach is robust, accurate, and straightforward.
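For reference, the statistics listed above all follow from the pixel-wise confusion matrix; the sketch below is a generic illustration (not the paper's code), with the eps guard against empty masks being an assumption. Dice and Jaccard can be derived from the same counts, as in the sketch under entry 1.

```python
import numpy as np

def confusion_metrics(pred, target, eps=1e-7):
    """Accuracy, sensitivity, specificity, precision, F-measure, and MCC
    from the pixel-wise confusion matrix of two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    tp = float(np.logical_and(pred, target).sum())
    tn = float(np.logical_and(~pred, ~target).sum())
    fp = float(np.logical_and(pred, ~target).sum())
    fn = float(np.logical_and(~pred, target).sum())
    accuracy = (tp + tn) / (tp + tn + fp + fn + eps)
    sensitivity = tp / (tp + fn + eps)            # recall / true-positive rate
    specificity = tn / (tn + fp + eps)
    precision = tp / (tp + fp + eps)
    f_measure = 2 * precision * sensitivity / (precision + sensitivity + eps)
    mcc = (tp * tn - fp * fn) / np.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn) + eps)
    return dict(accuracy=accuracy, sensitivity=sensitivity,
                specificity=specificity, precision=precision,
                f_measure=f_measure, mcc=mcc)

# Illustrative usage with random masks (not real data)
rng = np.random.default_rng(1)
print(confusion_metrics(rng.random((64, 64)) > 0.5, rng.random((64, 64)) > 0.5))
```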
